Demystifying generative AI
Editor’s note: Josh Seltzer is CTO and DPO, and Kathy Cheng is CEO of Nexxt Intelligence | inca, Toronto.
Since generative AI rose to prominence in 2023, ChatGPT has gained immense popularity. But what lies beyond ChatGPT, and how will generative AI impact the marketing research industry? In this article, we discuss the broader landscape of tools and applications that leverage generative AI, highlighting the potential for market research and beyond. We'll also provide tips for market researchers on using AI responsibly, covering privacy and insight provenance.
The power of generative AI: Exploring ChatGPT and beyond
When trained on incomprehensibly large amounts of data, a relatively simple machine learning architecture known as GPT – the generative pre-trained transformer – becomes a powerful large language model (LLM) capable of mimicking human language in truly impressive ways. OpenAI leveraged this technology to build ChatGPT, an LLM accessed via a simple chat interface. While OpenAI, with its product ChatGPT, remains the most prominent player in the generative AI landscape, its success has helped catalyze the industry's rapid development and commercialization.
ChatGPT: The tip of the iceberg
Large language models like the ones behind ChatGPT are a core focus of companies such as Google (Gemini), Meta (LLaMA), Anthropic and Cohere. LLMs are most commonly offered as an API, through which anyone can create purpose-built tools tailored to specific needs. Each LLM has its own strengths and weaknesses, and some are released under open-source licenses, enabling a thriving community of experimentation and development. Importantly, generative AI is not limited to LLMs – the transformer architecture behind GPT is adaptable to many types of data other than language, and other machine learning techniques such as diffusion have yielded impressive results in image and video generation.
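To make the "LLM as an API" idea concrete, here is a minimal sketch of a purpose-built tool: a follow-up-probe generator for open-ended survey answers, written in Python. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt and function are illustrative placeholders rather than a recommendation of any particular vendor or setup.

```python
# Illustrative sketch: a purpose-built probing tool layered on an LLM API.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set
# in the environment; the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_probe(question: str, answer: str) -> str:
    """Draft one neutral follow-up probe for an open-ended survey answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("You are a market research interviewer. Write one short, "
                         "neutral follow-up question. Do not lead the respondent.")},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_probe("What do you think of the new packaging?", "It's fine, I guess."))
```

A few dozen lines like these are how many "AI-powered" research tools begin: the heavy lifting sits behind the API, and the product's value lies in the prompting, workflow and safeguards built around it.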
The expanding landscape of generative AI applications
Generative AI holds immense potential both in its extensibility to other types of data and in the rapidly growing ecosystem that packages generative AI as a building block ready for any project. For example, products such as ChatGPT are increasingly being extended to analyze and generate rich data such as videos and images, as well as structured data such as spreadsheets and code, and a plethora of companies now specialize in integrating generative AI products into organizational processes.
Beyond these mainstream applications, research labs are continuously finding innovative ways to apply generative AI to diverse use cases. For example, neuroscience researchers have successfully applied diffusion techniques to reconstruct what a person is seeing from their brain activity (MinD-Vis). In other cases, GPT-style models have been applied to single-cell biology and time series analysis, and earth scientists are hard at work using them for geospatial and climate modelling.
Looking ahead: Opportunities to revolutionize the market research industry
The ever-expanding landscape of generative AI presents considerable challenges for incorporating AI responsibly, as well as countless exciting opportunities to revolutionize the market research industry. Looking at the state of the art in AI capabilities, there is an abundance of customization and extension waiting for researchers to explore. For example, qualitative research at scale with AI-led interviews is already available, and multimodal models will make ethnography at scale possible. Looking even further ahead, generative AI may push the boundaries of what is possible: traditional trend analysis may soon be eclipsed by analysis of macro-scale trends, with generative AI helping to decipher the underlying dynamics and social factors that make us who we are.
Navigating the responsible use of AI tools in market research
Concerns about privacy have become increasingly prevalent with the rise of AI-based tools across industries. While AI holds tremendous potential and promises improved experiences and insights, these concerns cannot be ignored.
Privacy and AI
While privacy is increasingly important in an age of generative AI, common sense and existing frameworks still apply when working with AI-powered platforms. GDPR requires companies that access personal information to be transparent about how it is used and to ensure that data is shared only with authorized parties. By reading the privacy policy and data processing addendum provided by a generative AI company, you can understand clearly who your data is shared with and how it is used. For example, while OpenAI reserves the right to use data collected through ChatGPT for training purposes, it offers the ability to opt out, as well as much tighter controls in its ChatGPT Enterprise offering.
Responsible AI application
Generative AI is prone to error and is fundamentally limited in ways that may not be obvious. Industry leaders therefore need to take a deliberate, empathetic approach to AI tools, ensuring that AI offers insights and facilitates interactions while respecting privacy and ethical considerations. It is crucial that AI applications are designed under the assumption that AI will make mistakes: guardrails, human-in-the-loop review and well-defined fallback processes are some of the mechanisms that can keep AI usage in check.
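As a rough illustration of designing for mistakes, the self-contained sketch below wraps an AI-drafted probe in a guardrail, a human review step and a predefined fallback. The helper functions are deliberately simple stand-ins: in practice the draft would come from an LLM, the guardrail from a real content filter and the review step from your reviewers' tooling.

```python
# Minimal guardrail / human-in-the-loop sketch. The helpers are stand-ins:
# draft_probe would call an LLM, looks_unsafe would be a real content filter,
# and human_approves would feed a review queue rather than a console prompt.

FALLBACK_PROBE = "Could you tell me a bit more about that?"
BANNED_TERMS = {"diagnosis", "medication"}  # e.g., off-limits topics in a health study

def draft_probe(question: str, answer: str) -> str:
    # Stand-in for an LLM call that drafts a follow-up probe.
    return f"You mentioned '{answer[:40]}' - what makes you say that?"

def looks_unsafe(text: str) -> bool:
    # Guardrail: reject drafts that touch off-limits topics.
    return any(term in text.lower() for term in BANNED_TERMS)

def human_approves(text: str) -> bool:
    # Human-in-the-loop: a reviewer accepts or rejects the draft.
    return input(f"Approve probe? '{text}' [y/n] ").strip().lower() == "y"

def next_probe(question: str, answer: str) -> str:
    draft = draft_probe(question, answer)
    if looks_unsafe(draft) or not human_approves(draft):
        return FALLBACK_PROBE  # well-defined fallback when the AI output is rejected
    return draft
```

The point is not the specific checks but the shape of the design: the AI output is never trusted by default, and there is always a safe, predefined path when it is rejected.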
AI and marketing research industry-specific concerns
Certain industries exhibit heightened privacy concerns. Health care, for instance, demands particular attention to how data is used and to the reporting of adverse effects; legal requirements and responsible reporting are crucial to protecting patient privacy and addressing any negative consequences. Concerns about the appropriateness of questions in research settings can likewise be addressed through careful design and adherence to existing protocols.
Looking ahead: Responsible adoption of AI
Privacy concerns surrounding AI should not be overlooked in today's evolving technological landscape. Responsible use and thoughtful consideration are key to leveraging AI in a way that respects privacy and delivers valuable insights in industry-specific contexts. Emerging guidelines and codes of conduct from ESOMAR and other industry associations will provide essential guidance for the responsible adoption of AI.
Embracing transparent AI adoption
Effective usage of AI ultimately depends on informed and empowered human researchers who can understand the strengths, weaknesses and potential biases of the research methodologies and data sources that they are using. Transparency is crucial when companies communicate about their use of AI.
Synthetic data as a new research tool
One contentious development in the field is the use of synthetic data, where LLMs generate responses to questions without the need for human participants. Initially met with skepticism, the idea of synthetic data has gained traction among researchers who understand that different data sources have their own strengths and limitations. Ensuring that synthetic data is accurately calibrated to real humans, and knowing how to use each source responsibly, will become valuable research skills.
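As a simple illustration of what "calibrating to real humans" can mean in practice, the sketch below compares how a synthetic panel and a small human sample answer the same closed question, using total variation distance as a rough divergence measure. The data is made up purely for illustration; real calibration would involve far larger samples and more rigorous statistics.

```python
# Illustrative calibration check: compare answer distributions from a synthetic
# "panel" and a human sample on the same closed question. The answers below are
# invented for the example; in practice the synthetic ones would come from an
# LLM prompted with respondent personas.
from collections import Counter

def distribution(answers):
    counts = Counter(answers)
    total = sum(counts.values())
    return {option: counts[option] / total for option in counts}

human = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
synthetic = ["agree", "agree", "agree", "neutral", "agree", "agree"]

human_dist = distribution(human)
synthetic_dist = distribution(synthetic)

# Total variation distance: 0 means identical distributions, 1 means no overlap.
options = set(human_dist) | set(synthetic_dist)
tvd = 0.5 * sum(abs(human_dist.get(o, 0) - synthetic_dist.get(o, 0)) for o in options)
print(f"Human: {human_dist}\nSynthetic: {synthetic_dist}\nDivergence (TVD): {tvd:.2f}")
```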
Insight provenance
Insight provenance, a concept adapted from data provenance, refers to understanding how and where an insight originated. When incorporating synthetic data into research, it is crucial to provide clear insight provenance by determining the role AI played in producing each finding. By clearly delineating which insights come from synthetic sources and which require human input, market researchers can make informed decisions about the reliability and applicability of the data at hand.
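One way to make insight provenance operational is to attach a machine-readable provenance record to every insight, so that synthetic and human-sourced findings can be separated downstream. The sketch below is a minimal illustration; the field names and categories are assumptions made for the example, not an industry standard.

```python
# Minimal sketch of recording insight provenance: each insight carries a record
# of where it came from and what role AI played. Field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Insight:
    statement: str
    source: str                 # e.g., "human_interview", "synthetic_panel"
    ai_role: str                # e.g., "none", "moderated", "generated", "summarized"
    evidence_ids: list = field(default_factory=list)   # links back to raw responses
    recorded_on: date = field(default_factory=date.today)

insights = [
    Insight("Price is the main barrier for lapsed buyers",
            source="human_interview", ai_role="summarized",
            evidence_ids=["R-014", "R-022"]),
    Insight("Younger respondents favour refill packaging",
            source="synthetic_panel", ai_role="generated"),
]

# Downstream, it is easy to separate claims that still need human validation.
needs_validation = [i.statement for i in insights if i.source == "synthetic_panel"]
print(needs_validation)
```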
Transparency and accountability
Both technology providers and market researchers have a role to play in ensuring transparency and accountability as AI is integrated into research. Technology providers must be transparent about the limitations of synthetic data and provide clear information about its implications. Market researchers, in turn, need to maintain data provenance and be accountable for the insights they derive from AI-powered tools, including communicating clearly with clients about the origin and reliability of those insights.
Looking ahead: AI and data integrity
Market research buyers should be proactive about understanding the challenges and opportunities of generative AI and about devising a strategy that fosters responsible AI usage, one that leverages the strengths of different solutions and data sources while maintaining the integrity of research findings. Insight provenance provides a useful lens for building trust and delivering impactful insights, and it will allow market researchers to benefit from the unparalleled opportunities of generative AI.